Brief Announcement: Almost-Tight Approximation Distributed Algorithm for Minimum Cut
In this short paper, we present an improved algorithm for approximating the
minimum cut on distributed (CONGEST) networks. Let \lambda be the minimum
cut. Our algorithm can compute \lambda exactly in
\tilde{O}((\sqrt{n}+D)\poly(\lambda)) time, where n is the number of nodes
(processors) in the network, D is the network diameter, and \tilde{O} hides
\poly\log n factors. By a standard reduction, we can convert this algorithm into a
(1+\epsilon)-approximation \tilde{O}((\sqrt{n}+D)/\poly(\epsilon))-time
algorithm. The latter result improves over the previous
(2+\epsilon)-approximation \tilde{O}((\sqrt{n}+D)/\poly(\epsilon))-time
algorithm of Ghaffari and Kuhn [DISC 2013]. Due to the lower bound of
\tilde{\Omega}(\sqrt{n}+D) by Das Sarma et al. [SICOMP 2013], this running
time is {\em tight} up to a \poly\log n factor. Our algorithm is an extremely
simple combination of Thorup's tree packing theorem [Combinatorica 2007],
Kutten and Peleg's tree partitioning algorithm [J. Algorithms 1998], and
Karger's dynamic programming [JACM 2000].
Comment: To appear as a brief announcement at PODC 201
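As a loose, sequential illustration of the tree-packing ingredient mentioned above (Thorup's theorem concerns packings in which some tree crosses the minimum cut at most twice), here is a minimal sketch of greedy tree packing: repeatedly take a minimum spanning tree with respect to edge loads, then increment the loads of its edges. All names are our own, and this is not the distributed algorithm of the paper; weighted graphs would scale the load increment by edge weight, omitted here for brevity.

```python
def kruskal_mst(n, edges, load):
    """Spanning tree minimizing total edge load (Kruskal with union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for i in sorted(range(len(edges)), key=lambda i: load[i]):
        u, v = edges[i]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(i)  # record the edge by its index
    return tree

def greedy_tree_packing(n, edges, rounds):
    """Greedily pack `rounds` spanning trees, spreading load over edges."""
    load = [0.0] * len(edges)
    packing = []
    for _ in range(rounds):
        tree = kruskal_mst(n, edges, load)
        packing.append(tree)
        for i in tree:
            load[i] += 1.0
    return packing

# Pack 3 trees into K4 (6 edges, indices 0..5).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
for tree in greedy_tree_packing(4, edges, 3):
    print(tree)  # each tree lists 3 edge indices
```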
Equivalence Classes and Conditional Hardness in Massively Parallel Computations
The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasingly more attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole class of problems.
In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic amount of rounds in the MPC model, denoted by MPC(o(log N)), and some standard classes concerning space complexity, namely L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
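For readers unfamiliar with the conjectured-hard problem mentioned above, a minimal sequential sketch of the one cycle vs. two cycles problem may help (the input is a degree-2 graph; outside MPC the problem is trivial via union-find). The edge-list encoding is our own choice for illustration.

```python
def count_cycles(n, edges):
    """Count connected components of a disjoint union of cycles."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1  # each successful union merges two components
    return components

# One 6-cycle vs. two 3-cycles on the same vertex set.
one = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(count_cycles(6, one))  # 1
print(count_cycles(6, two))  # 2
```

The conjecture underlying the MPC lower bounds is that distinguishing these two outputs requires Omega(log N) rounds when memory per machine is strongly sublinear.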
Weighted Min-Cut: Sequential, Cut-Query and Streaming Algorithms
Consider the following 2-respecting min-cut problem. Given a weighted graph
G and its spanning tree T, find the minimum cut among the cuts that contain
at most two edges in T. This problem is an important subroutine in Karger's
celebrated randomized near-linear-time min-cut algorithm [STOC'96]. We present
a new approach for this problem which can be easily implemented in many
settings, leading to the following randomized min-cut algorithms for weighted
graphs.
* An O(m\frac{\log^2 n}{\log\log n}+n\log^6 n)-time sequential algorithm:
This improves Karger's O(m\log^3 n) and O(n^2\log n) bounds when the input graph is not extremely
sparse or dense. Improvements over Karger's bounds were previously known only
under a rather strong assumption that the input graph is simple [Henzinger et
al. SODA'17; Ghaffari et al. SODA'20]. For unweighted graphs with parallel
edges, our bound can be improved to O(m\frac{\log^2 n}{\log\log n}+n\log^3 n).
* An algorithm requiring \tilde{O}(n) cut queries to compute the min-cut of
a weighted graph: This answers an open problem by Rubinstein et al. ITCS'18,
who obtained a similar bound for simple graphs.
* A streaming algorithm that requires \tilde{O}(n) space and O(\log n)
passes to compute the min-cut: The only previous non-trivial exact min-cut
algorithm in this setting is the 2-pass \tilde{O}(n)-space algorithm on simple
graphs [Rubinstein et al., ITCS'18] (observed by Assadi et al. STOC'19).
In contrast to Karger's 2-respecting min-cut algorithm, which deploys
sophisticated dynamic programming techniques, our approach exploits some cute
structural properties so that it only needs to compute the values of cuts
corresponding to removing pairs of tree edges, an operation that can be done
quickly in many settings.
Comment: Updates on this version: (1) Minor corrections in Sections 5.1 and 5.2;
(2) References to newer results by GMW SOSA21 (arXiv:2008.02060v2), DEMN
STOC21 (arXiv:2004.09129v2) and LMN 21 (arXiv:2102.06565v1).
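The abstract above reduces everything to evaluating the cuts obtained by removing pairs of tree edges. A brute-force sketch of that subroutine (quadratic in the number of tree-edge pairs, purely for illustration; the paper's contribution is doing this fast) could look like the following, where the graph and tree encodings are our own:

```python
from collections import deque
from itertools import combinations

def two_respecting_min_cut(n, edges, tree_edges, root=0):
    """edges: (u, v, weight) triples; tree_edges: (u, v) pairs forming a
    spanning tree. Returns the cheapest cut crossing at most 2 tree edges."""
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    # Orient the tree away from the root.
    par = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in par:
                par[w] = u
                queue.append(w)

    def subtree(u):
        """Vertex set of the subtree rooted at u."""
        s, stack = set(), [u]
        while stack:
            x = stack.pop()
            s.add(x)
            stack.extend(w for w in adj[x] if par.get(w) == x)
        return frozenset(s)

    # One vertex set per tree edge: everything hanging below that edge.
    sides = [subtree(u if par.get(u) == v else v) for u, v in tree_edges]

    def cut_weight(S):
        return sum(w for u, v, w in edges if (u in S) != (v in S))

    best = min(cut_weight(S) for S in sides)  # cuts crossing one tree edge
    for S, T in combinations(sides, 2):       # crossing two tree edges: one
        best = min(best, cut_weight(S ^ T))   # side is the symmetric difference
    return best

# 4-cycle with unit weights: the global min cut is 2, and it
# 2-respects the path tree 0-1-2-3.
print(two_respecting_min_cut(
    4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)],
    [(0, 1), (1, 2), (2, 3)]))  # 2
```

Note that for an arbitrary single tree this returns only the best 2-respecting cut for that tree; the guarantee that some tree in a packing 2-respects the global minimum cut comes from Karger's tree-packing framework.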
A Faster Distributed Single-Source Shortest Paths Algorithm
We devise new algorithms for the single-source shortest paths (SSSP) problem
with non-negative edge weights in the CONGEST model of distributed computing.
While close-to-optimal solutions, in terms of the number of rounds spent by the
algorithm, have recently been developed for computing SSSP approximately, the
fastest known exact algorithms are still far away from matching the lower bound
of \tilde{\Omega}(\sqrt{n}+D) rounds by Peleg and Rubinovich [SIAM
Journal on Computing 2000], where n is the number of nodes in the network
and D is its diameter. The state of the art is Elkin's randomized algorithm
[STOC 2017] that performs \tilde{O}(n^{2/3}D^{1/3}+n^{5/6}) rounds. We
significantly improve upon this upper bound with our two new randomized
algorithms for polynomially bounded integer edge weights, the first performing
\tilde{O}(\sqrt{nD}) rounds and the second performing
\tilde{O}(\sqrt{n}D^{1/4}+n^{3/5}+D) rounds. Our bounds also compare favorably to the
independent result by Ghaffari and Li [STOC 2018]. As side results, we obtain a
(1+o(1))-approximation \tilde{O}(\sqrt{n}D^{1/4}+D)-round algorithm for directed SSSP and a new work/depth trade-off for exact
SSSP on directed graphs in the PRAM model.
Comment: Presented at the 59th Annual IEEE Symposium on Foundations of
Computer Science (FOCS 2018)
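For reference, the sequential baseline these distributed round complexities are measured against is Dijkstra's algorithm for non-negative edge weights. A standard sketch, with our own adjacency-dict encoding:

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with non-negative weights w.
    Returns shortest-path distances from source to reachable vertices."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry, already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {0: [(1, 7), (2, 1)], 1: [], 2: [(1, 2)]}
print(dijkstra(graph, 0))  # {0: 0, 1: 3, 2: 1}
```

In CONGEST the difficulty is not the total work but coordinating this relaxation process across the network in few communication rounds.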
Faster Algorithms for Semi-Matching Problems
We consider the problem of finding a \textit{semi-matching} in bipartite graphs,
a problem that is also extensively studied under various names in the scheduling
literature. We give faster algorithms for both the weighted and the unweighted case.
For the weighted case, we give an O(nm\log n)-time algorithm, where n is
the number of vertices and m is the number of edges, by exploiting the
geometric structure of the problem. This improves the classical O(n^3)-time
algorithms by Horn [Operations Research 1973] and Bruno, Coffman and Sethi
[Communications of the ACM 1974].
For the unweighted case, the bound can be improved even further. We give a
simple divide-and-conquer algorithm which runs in O(\sqrt{n}m\log n) time,
improving two previous O(nm)-time algorithms by Abraham [MSc thesis,
University of Glasgow 2003] and Harvey, Ladner, Lov\'asz and Tamir [WADS 2003
and Journal of Algorithms 2006]. We also extend this algorithm to solve the
\textit{Balanced Edge Cover} problem in O(\sqrt{n}m\log n) time, improving the
previous O(nm)-time algorithm by Harada, Ono, Sadakane and Yamashita [ISAAC
2008].
Comment: ICALP 201
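To pin down the objective: a semi-matching assigns every left vertex (job) to one of its neighboring right vertices (machines), and under the standard scheduling interpretation a machine with load l contributes l(l+1)/2, the total completion time of l unit jobs. A brute-force sketch of this objective on a toy instance (not the paper's divide-and-conquer algorithm; the encoding is our own):

```python
from itertools import product

def optimal_semi_matching_cost(neighbors):
    """neighbors: list over left vertices of the right vertices each may use.
    Returns the minimum total scheduling cost over all semi-matchings."""
    best = float('inf')
    for choice in product(*neighbors):  # every assignment of jobs to machines
        load = {}
        for machine in choice:
            load[machine] = load.get(machine, 0) + 1
        # l unit jobs on one machine incur total completion time 1+2+...+l.
        cost = sum(l * (l + 1) // 2 for l in load.values())
        best = min(best, cost)
    return best

# Three jobs; jobs 0 and 1 can only go to machine 'a', job 2 to 'a' or 'b'.
print(optimal_semi_matching_cost([['a'], ['a'], ['a', 'b']]))  # 4
```

Sending job 2 to 'b' balances the loads (2 and 1, cost 3 + 1 = 4), which beats putting all three jobs on 'a' (cost 6); balancing loads is exactly what a semi-matching optimizes.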
Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns
the term f(G*H), e.g., to show that f(G*H)=f(G)f(H). In this paper, we
study graph products in a non-standard form f(R[G*H]) where R is a
"reduction", a transformation of any graph into an instance of an intended
optimization problem. We resolve some open problems as applications.
(1) A tight n^{1-\epsilon}-approximation hardness for the minimum
consistent deterministic finite automaton (DFA) problem, where n is the
sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this
implies the hardness of properly learning DFAs assuming NP \neq RP (the
weakest possible assumption).
(2) A tight n^{1/2-\epsilon} hardness for the edge-disjoint paths (EDP)
problem on directed acyclic graphs (DAGs), where n denotes the number of
vertices.
(3) A tight hardness of packing vertex-disjoint k-cycles for large k.
(4) An alternative (and perhaps simpler) proof for the hardness of properly
learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004
and J. Comput. Syst. Sci. 2008].
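As a toy instance of the f(G*H)=f(G)f(H) phenomenon the abstract alludes to: the clique number is multiplicative under the strong graph product. A small brute-force sketch, with our own graph encoding (vertex sets range(n), edges as sets of pairs):

```python
from itertools import combinations

def strong_product(n1, e1, n2, e2):
    """Strong product: (a,b)~(c,d) iff a=c or a~c, and b=d or b~d."""
    verts = [(a, b) for a in range(n1) for b in range(n2)]

    def adj1(a, c):
        return a == c or (a, c) in e1 or (c, a) in e1

    def adj2(b, d):
        return b == d or (b, d) in e2 or (d, b) in e2

    edges = set()
    for (a, b), (c, d) in combinations(verts, 2):
        if adj1(a, c) and adj2(b, d):
            edges.add(((a, b), (c, d)))
    return verts, edges

def clique_number(verts, edges):
    """Brute-force maximum clique size (only for tiny graphs)."""
    def ok(S):
        return all((u, v) in edges or (v, u) in edges
                   for u, v in combinations(S, 2))
    return max(k for k in range(1, len(verts) + 1)
               for S in combinations(verts, k) if ok(S))

tri = {(0, 1), (1, 2), (0, 2)}  # K3, clique number 3
k2 = {(0, 1)}                   # K2, clique number 2
verts, edges = strong_product(3, tri, 2, k2)
print(clique_number(verts, edges))  # 6 = 3 * 2
```

The paper's twist is to interleave a reduction R between the product and the objective, i.e., to analyze f(R[G*H]) instead of f(G*H).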